64 research outputs found
Accurate detection of spiking motifs in multi-unit raster plots
Recently, interest has grown in exploring the hypothesis that neural activity
conveys information through precise spiking motifs. To investigate this
phenomenon, various algorithms have been proposed to detect such motifs in
Single Unit Activity (SUA) recorded from populations of neurons. In this study,
we present a novel detection model based on the inversion of a generative model
of raster plot synthesis. Using this generative model, we derive an optimal
detection procedure that takes the form of logistic regression combined with
temporal convolution. A key advantage of this model is its differentiability,
which allows us to formulate a supervised learning approach using a gradient
descent on the binary cross-entropy loss. To assess the model's ability to
detect spiking motifs in synthetic data, we first perform numerical
evaluations. This analysis highlights the advantages of using spiking motifs
over traditional firing-rate-based population codes. We then successfully
demonstrate that our learning method can recover synthetically generated
spiking motifs, indicating its potential for further applications. In the
future, we aim to extend this method to real neurobiological data, where the
ground truth is unknown, to explore and detect spiking motifs in a more natural
and biologically relevant context.
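The detection principle above can be illustrated with a toy numerical sketch. This is not the paper's code: the raster size, motif statistics, and function names are all made up, and the kernel is simply set to the ground-truth motif rather than learned by gradient descent on the binary cross-entropy loss as in the supervised setting.

```python
import numpy as np

# Toy setup: a binary raster (neurons x time) with a planted spiking motif.
rng = np.random.default_rng(0)
n_neurons, n_time, motif_len = 5, 200, 11
motif = (rng.random((n_neurons, motif_len)) < 0.3).astype(float)
raster = (rng.random((n_neurons, n_time)) < 0.05).astype(float)
for t in (40, 120):                                  # plant two motif occurrences
    raster[:, t:t + motif_len] = np.maximum(raster[:, t:t + motif_len], motif)

def detect(raster, kernel, bias):
    """Logistic regression on a temporal convolution of the raster."""
    L = kernel.shape[1]
    logit = np.array([np.sum(raster[:, t:t + L] * kernel)
                      for t in range(raster.shape[1] - L + 1)]) + bias
    return 1.0 / (1.0 + np.exp(-logit))              # P(motif onset at time t)

# With the true motif as kernel, detection probability peaks at the onsets.
p = detect(raster, motif, bias=0.5 - motif.sum())
print(p[40] > 0.5, p[120] > 0.5)
```

In the learning setting, `motif` and `bias` would be free parameters optimized by gradient descent, which is possible precisely because this forward pass is differentiable.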
Biologically Inspired Dynamic Textures for Probing Motion Perception
Perception is often described as a predictive process based on an optimal
inference with respect to a generative model. We study here the principled
construction of a generative model specifically crafted to probe motion
perception. In that context, we first provide an axiomatic, biologically-driven
derivation of the model. This model synthesizes random dynamic textures which
are defined by stationary Gaussian distributions obtained by the random
aggregation of warped patterns. Importantly, we show that this model can
equivalently be described as a stochastic partial differential equation. This
characterization of motion in images allows us to recast motion-energy models
into a principled Bayesian inference framework. Finally, we apply these
textures in order to psychophysically probe speed perception in humans. In this
framework, while the likelihood is derived from the generative model, the prior
is estimated from the observed results and accounts for the perceptual bias in
a principled fashion.
Comment: Twenty-ninth Annual Conference on Neural Information Processing
Systems (NIPS), Dec 2015, Montreal, Canada
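The likelihood-plus-prior scheme described above can be sketched numerically. This is an illustrative toy, not the paper's code: the Gaussian likelihood, the slow-speed prior, and all parameter values are assumptions chosen only to show how a prior produces a perceptual bias.

```python
import numpy as np

v = np.linspace(0.0, 40.0, 4001)                     # candidate speeds (arbitrary units)

def posterior_mean(v_stim, sigma_like=2.0, sigma_prior=5.0):
    like = np.exp(-0.5 * ((v - v_stim) / sigma_like) ** 2)   # measurement likelihood
    prior = np.exp(-0.5 * (v / sigma_prior) ** 2)            # prior favouring slow speeds
    post = like * prior                              # unnormalized posterior
    post /= post.sum()
    return float((v * post).sum())                   # posterior mean estimate

# The slow-speed prior pulls the estimate below the true stimulus speed.
print(posterior_mean(10.0))
```

In the paper's setting the likelihood comes from the generative model of the texture, and the prior is the quantity estimated from observers' responses.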
Edge co-occurrences can account for rapid categorization of natural versus animal images
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the "association field" for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
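The second-order statistic underlying such an "association field" can be sketched as follows. This toy, with made-up names and parameters, histograms the orientation difference between pairs of nearby edges; on edges drawn from smooth contours, the histogram peaks at small differences (co-linearity).

```python
import numpy as np

def cooccurrence_histogram(xy, theta, n_bins=12, max_dist=5.0):
    """Histogram of orientation differences between nearby edge pairs."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    i, j = np.where((d > 0) & (d < max_dist))        # nearby, distinct pairs
    # Wrap the orientation difference to [-pi/2, pi/2).
    dtheta = (theta[i] - theta[j] + np.pi / 2) % np.pi - np.pi / 2
    hist, _ = np.histogram(dtheta, bins=n_bins,
                           range=(-np.pi / 2, np.pi / 2), density=True)
    return hist

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(200, 2))               # edge positions
theta = rng.normal(0.0, 0.2, size=200) % np.pi       # near-collinear orientations
h = cooccurrence_histogram(xy, theta)
print(h.argmax())                                    # a central bin: co-linearity dominates
```

The paper's statistic is richer (it also conditions on distance and relative position), but the pairwise-histogram idea is the same.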
Role of homeostasis in learning sparse representations
Neurons in the input layer of primary visual cortex in primates develop
edge-like receptive fields. One approach to understanding the emergence of this
response is to state that neural activity has to efficiently represent sensory
data with respect to the statistics of natural scenes. Furthermore, it is
believed that such an efficient coding is achieved using a competition across
neurons so as to generate a sparse representation, that is, where a relatively
small number of neurons are simultaneously active. Indeed, different models of
sparse coding, coupled with Hebbian learning and homeostasis, have been
proposed that successfully match the observed emergent response. However, the
specific role of homeostasis in learning such sparse representations is still
largely unknown. By quantitatively assessing the efficiency of the neural
representation during learning, we derive a cooperative homeostasis mechanism
that optimally tunes the competition between neurons within the sparse coding
algorithm. We apply this homeostasis while learning small patches taken from
natural images and compare its efficiency with state-of-the-art algorithms.
Results show that while different sparse coding algorithms give similar coding
results, the homeostasis provides an optimal balance for the representation of
natural images within the population of neurons. Competition in sparse coding
is optimized when it is fair. By helping to optimize statistical competition
across neurons, homeostasis is crucial in providing a more efficient solution
to the emergence of independent components.
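The idea of "fair" competition can be sketched in a few lines. This is an illustration of the principle, not the paper's algorithm: in a winner-take-all coder, each atom's correlation with the input is modulated by a homeostatic gain that is lowered whenever the atom is selected more often than average, pushing selection probabilities toward uniform.

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms, dim, eta = 8, 16, 0.05
D = rng.normal(size=(n_atoms, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)        # unit-norm dictionary
gain = np.ones(n_atoms)
counts = np.zeros(n_atoms)

for step in range(2000):
    x = 2.0 * D[0] + rng.normal(scale=0.5, size=dim) # inputs biased toward atom 0
    k = np.argmax(gain * np.abs(D @ x))              # gain-modulated selection
    counts[k] += 1
    p_hat = counts / counts.sum()
    gain *= np.exp(-eta * (p_hat - 1.0 / n_atoms))   # damp over-used atoms

# Without homeostasis atom 0 would win every time; with it, atom 0's gain
# is suppressed and the other atoms share the representation.
print(gain.argmin(), counts[0] / counts.sum())
```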
Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception
Choosing an appropriate set of stimuli is essential to characterize the
response of a sensory system to a particular functional dimension, such as the
eye movement following the motion of a visual scene. Here, we describe a
framework to generate random texture movies with controlled information
content, i.e., Motion Clouds. These stimuli are defined using a generative
model that is based on controlled experimental parametrization. We show that
Motion Clouds correspond to a dense mixture of localized moving gratings with
random positions. Their global envelope is similar to natural-like stimulation
with an approximate full-field translation corresponding to a retinal slip. We
describe the construction of these stimuli mathematically and propose an
open-source Python-based implementation. Examples of the use of this framework
are shown. We also propose extensions to other modalities such as color vision,
touch, and audition.
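A schematic version of this synthesis can be written in a few lines of NumPy. This is a simplification of the idea, not the paper's open-source implementation (the MotionClouds package), and all parameter values are illustrative: a random-phase texture whose Fourier envelope concentrates energy on a ring of spatial frequencies and near the plane ft = -v_x * fx corresponding to a full-field translation.

```python
import numpy as np

N, T, v_x = 64, 32, 1.0                              # frame size, frames, speed
fx = np.fft.fftfreq(N)[:, None, None]
fy = np.fft.fftfreq(N)[None, :, None]
ft = np.fft.fftfreq(T)[None, None, :]

f_r = np.sqrt(fx ** 2 + fy ** 2)
env_sf = np.exp(-0.5 * ((f_r - 0.125) / 0.05) ** 2)       # ring at a mean frequency
env_speed = np.exp(-0.5 * ((ft + v_x * fx) / 0.05) ** 2)  # energy near the speed plane
envelope = env_sf * env_speed

rng = np.random.default_rng(3)
phase = np.exp(2j * np.pi * rng.random((N, N, T)))   # i.i.d. random phases
# Taking the real part is a shortcut for enforcing Hermitian symmetry; it
# still yields a stationary, Gaussian-like random texture movie.
movie = np.real(np.fft.ifftn(envelope * phase))
print(movie.shape)
```

Because the stimulus is fully specified by the envelope parameters (mean frequency, bandwidths, speed), each can be varied independently in a psychophysical experiment.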
From the retina to action: Dynamics of predictive processing in the visual system
Within the central nervous system, visual areas are essential in transforming the raw luminous signal into a representation which efficiently conveys information about the environment. This process is constrained by the necessity of being robust and rapid. Indeed, there exists both a wide variety of potential changes in the geometrical characteristics of the visual scene and also a necessity to be able to respond as quickly as possible to the incoming sensory stream, for instance to drive a movement of the eyes to the location of a potential danger. Decades of study in neurophysiology and psychophysics at the different levels of vision have shown that this system takes advantage of a priori knowledge about the structure of visual information, such as the regularity in the shape and motion of visual objects. As such, the predictive processing framework offers a unified theory to explain a variety of visual mechanisms. However, we still lack a global normative approach unifying those mechanisms, and we will review here some recent and promising approaches. First, we will describe Active Inference, a form of predictive processing equipped with the ability to actively sample the visual space. Then, we will extend this paradigm to the case where information is distributed on a topography, as is the case for retinotopically organized visual areas. In particular, we will compare such models in light of recent neurophysiological data showing the role of traveling waves in shaping visual processing. Finally, we will propose some lines of research to understand how these functional models may be implemented at the neural level. In particular, we will review potential models of cortical processing in terms of prototypical micro-circuits. These make it possible to separate the different flows of information, from feed-forward prediction error to feed-back anticipation error. Still, the design of such a generic predictive processing circuit is not fully understood, and we will enumerate some possible implementations using biomimetic neural networks.
Keywords: Vision, Perception, Dynamics, Delays, Topography, Spiking Neural Networks, Bayesian Model, Active Inference
An Adaptive Homeostatic Algorithm for the Unsupervised Learning of Visual Features
The formation of structure in the visual system, that is, of the connections between cells within neural populations, is by and large an unsupervised learning process. In the primary visual cortex of mammals, for example, one can observe during development the formation of cells selective to localized, oriented features, which results in the development of a representation of images' edges in area V1. This can be modeled using sparse Hebbian learning algorithms which alternate a coding step to encode the information with a learning step to find the proper encoder. A major difficulty of such algorithms is the joint problem of finding a good representation with immature encoders while learning good encoders from a nonoptimal representation. To solve this problem, this work introduces a new regulation process between learning and coding which is motivated by the homeostasis processes observed in biology. Such an optimal homeostasis rule is implemented by including an adaptation mechanism based on nonlinear functions that balance the antagonistic processes that occur at the coding and learning time scales. It is compatible with a neuromimetic architecture and allows for a more efficient emergence of localized filters sensitive to orientation. In addition, this homeostasis rule is simplified by implementing a simple heuristic on the probability of activation of neurons. Compared to the optimal homeostasis rule, numerical simulations show that this heuristic allows for a faster unsupervised learning algorithm while retaining much of its effectiveness. These results demonstrate the potential application of such a strategy in machine learning, and this is illustrated by showing the effect of homeostasis on the emergence of edge-like filters in a convolutional neural network.
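The coding/learning alternation described above can be sketched compactly. This is an illustration only, not the paper's code: a one-atom coding step is followed by a Hebbian update of the selected atom, and the simple homeostatic heuristic is realized by damping the winner's gain so that activation probabilities stay balanced.

```python
import numpy as np

rng = np.random.default_rng(4)
n_atoms, dim, lr, eta = 16, 64, 0.05, 0.01
D = rng.normal(size=(n_atoms, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)        # encoder (dictionary)
gain = np.ones(n_atoms)

for step in range(3000):
    x = rng.normal(size=dim)                         # stand-in for an image patch
    c = D @ x
    k = np.argmax(gain * np.abs(c))                  # coding step (biased winner-take-all)
    D[k] += lr * c[k] * (x - c[k] * D[k])            # learning step (Oja-style Hebbian)
    D[k] /= np.linalg.norm(D[k])
    gain[k] *= np.exp(-eta)                          # heuristic: damp the winner...
    gain *= np.exp(eta / n_atoms)                    # ...and slowly restore all gains

print(bool(np.allclose(np.linalg.norm(D, axis=1), 1.0)))
```

On white noise the learned filters are arbitrary; trained on natural image patches, such loops are what yield the localized, oriented, edge-like filters discussed in the abstract.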